With the development of natural language processing (NLP) techniques, automatic diagnosis of eye diseases using ophthalmology electronic medical records (OEMR) has become possible. The task aims to evaluate the condition of each of a patient's eyes, and we formulate it as a particular multi-label classification task in this paper. Although there are a few related studies on other diseases, automatic diagnosis of eye diseases exhibits unique characteristics. First, descriptions of both eyes are mixed together in OEMR documents, combining free text with templated asymptomatic descriptions, which results in sparse and cluttered information. Second, OEMR documents contain multiple parts of descriptions and are long. Third, it is critical for a disease diagnosis model to provide explainability. To overcome these challenges, we present an effective automatic eye disease diagnosis framework, NEEDED. In this framework, a preprocessing module is integrated to improve the density and quality of information. Then, we design a hierarchical transformer structure to learn contextualized representations of each sentence in the OEMR document. For the diagnosis part, we propose an attention-based predictor that enables traceable diagnosis by extracting disease-specific information. Experiments on a real-world dataset and comparisons with several baseline models show the advantages and explainability of our framework.
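As a rough illustration of the attention-based predictor described above, the sketch below assumes sentence representations have already been produced by the hierarchical transformer; each disease label owns a learned query that attends over sentences, and the attention weights can be inspected for traceability. The module and parameter names are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AttentionDiagnosisHead(nn.Module):
    """Illustrative attention-based predictor: each disease label has a learned
    query that attends over sentence representations, producing a disease-specific
    document vector plus an attention map usable for explainability."""
    def __init__(self, hidden_dim: int, num_diseases: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_diseases, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, sent_reprs, sent_mask):
        # sent_reprs: (batch, num_sents, hidden); sent_mask: (batch, num_sents), 1 for real sentences
        scores = torch.einsum("dh,bsh->bds", self.queries, sent_reprs)       # (batch, diseases, sents)
        scores = scores.masked_fill(sent_mask.unsqueeze(1) == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)                                  # per-disease attention over sentences
        disease_repr = torch.einsum("bds,bsh->bdh", attn, sent_reprs)         # disease-specific document vectors
        logits = self.classifier(disease_repr).squeeze(-1)                    # (batch, diseases)
        return logits, attn

# toy usage
head = AttentionDiagnosisHead(hidden_dim=256, num_diseases=12)
logits, attn = head(torch.randn(2, 40, 256), torch.ones(2, 40))
```

The returned attention map indicates which sentences supported each predicted diagnosis, which is the sense in which the diagnosis is traceable.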
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data. However, existing works fail to jointly address several practical concerns in FL: limited communication resources, dynamic network conditions, and heterogeneous client properties, all of which slow down the convergence of FL. To tackle these challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset considering statistical heterogeneity and sends the global model to them. After local training, the selected clients upload compressed model updates matching their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize client selection and compression-ratio decisions using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to a 5.3$\times$ speedup compared with other methods.
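The two levers FedCG turns, representative client selection and capability-aware update compression, can be illustrated with the hedged sketch below. The facility-location objective and top-k sparsifier are common stand-ins chosen for illustration only; the paper's actual objective and compression operator follow its convergence analysis, and the similarity matrix here is a hypothetical measure of client data-distribution overlap.

```python
import numpy as np

def greedy_client_selection(similarity, k):
    """Greedy maximization of a facility-location (submodular) objective: pick k clients
    so that every client is well covered by its most similar selected representative.
    similarity[i, j] is an assumed data-distribution similarity between clients i and j."""
    n = similarity.shape[0]
    selected, coverage = [], np.zeros(n)
    for _ in range(k):
        gains = [-np.inf if j in selected
                 else np.maximum(coverage, similarity[:, j]).sum() - coverage.sum()
                 for j in range(n)]
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, similarity[:, best])
    return selected

def top_k_compress(update, ratio):
    """Keep only the largest-magnitude fraction `ratio` of update entries; the rest
    are zeroed before upload, shrinking the communication volume."""
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

sim = np.random.rand(20, 20); sim = (sim + sim.T) / 2
chosen = greedy_client_selection(sim, k=5)
compressed = top_k_compress(np.random.randn(1000), ratio=0.1)
```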
With the growth of high-dimensional sparse data in web-scale recommender systems, the computational cost of learning high-order feature interactions in the CTR prediction task increases substantially, which limits the use of high-order interaction models in real industrial applications. Some recent knowledge distillation based methods transfer knowledge from complex teacher models to shallow student models to accelerate online model inference. However, they suffer from a degradation of model accuracy during the knowledge distillation process, and it is challenging to balance the efficiency and effectiveness of the shallow student models. To address this problem, we propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) that learns high-order feature interactions from existing complex interaction models for CTR prediction via knowledge distillation. The proposed lightweight student model, DAGFM, can learn arbitrary explicit feature interactions from teacher networks and achieves approximately lossless performance, as proved by a dynamic programming algorithm. Besides, an improved general model, KD-DAGFM+, is shown to be effective in distilling both explicit and implicit feature interactions from any complex teacher model. Extensive experiments are conducted on four real-world datasets, including a large-scale industrial dataset from the WeChat platform with billions of feature dimensions. KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments, showing the superiority of DAGFM for handling industrial-scale data in the CTR prediction task. Our implementation code is available at: https://github.com/RUCAIBox/DAGFM.
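A minimal sketch of the distillation step, not the DAGFM student architecture itself, is given below: the student is trained to match the teacher's soft click probabilities while still fitting the binary labels. The mixing weight `alpha` and the MSE-on-probabilities form are illustrative assumptions; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def ctr_distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Hedged sketch of a CTR knowledge-distillation objective: a hard BCE term on
    the click labels plus a soft term pulling the student's click probability toward
    the (frozen) teacher's prediction."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    soft = F.mse_loss(torch.sigmoid(student_logits), torch.sigmoid(teacher_logits))
    return (1 - alpha) * hard + alpha * soft

labels = torch.randint(0, 2, (8,)).float()
loss = ctr_distillation_loss(torch.randn(8), torch.randn(8), labels)
```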
Out-of-domain (OOD) detection is a key component of task-oriented dialogue systems, aiming to determine whether a query falls outside the predefined set of supported intents. Previous softmax-based detection algorithms have been shown to be overconfident on OOD samples. In this paper, we analyze overconfident OOD predictions as arising from distributional uncertainty caused by the mismatch between the training and test distributions, which prevents the model from making confident predictions and can thus lead to abnormal softmax scores. We propose a Bayesian OOD detection framework that calibrates distributional uncertainty using Monte-Carlo dropout. Our method is flexible and can be easily plugged into existing softmax-based baselines, gaining a 33.33% improvement in OOD F1 while adding only 0.41% inference time compared with MSP. Further analysis shows the effectiveness of Bayesian learning for OOD detection.
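A minimal sketch of the Monte-Carlo dropout calibration idea is shown below, assuming a classifier that contains dropout layers (and no batch-normalization side effects from switching to train mode); the scoring rule here, predictive entropy of the averaged softmax, is one common choice and is not claimed to be the paper's exact formulation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_ood_score(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte-Carlo dropout OOD scoring sketch: keep dropout stochastic at test time,
    average the softmax over several passes, and use predictive entropy as the
    OOD score (higher entropy = more likely out-of-domain)."""
    model.train()  # keeps dropout active; assumes no BatchNorm statistics are updated elsewhere
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy  # threshold this score to flag OOD queries
```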
Modular design is the foundation of on-orbit construction technology for future large space facilities, and standard interfaces are a key technology for the modular design of future space robotic systems and space facilities. This paper presents the design and testing of PetLock, a standard androgynous interface that can transfer mechanical loads, power, and data between future modular space robotic manipulators and spacecraft. PetLock adopts a fully androgynous design, including the connection surface, the locking mechanism, and the data and power interfaces. The connection surface provides large translational and rotational misalignment tolerance owing to its 120-degree symmetric, three-dimensional shape. The locking mechanism features a three-locking-pin retraction structure that is simple and reliable. With the advantages of high locking force, high misalignment tolerance, high reliability, and low cost, PetLock has great application potential in future on-orbit construction missions.
Forward And Backward Reaching Inverse Kinematics (FABRIK) is a heuristic inverse kinematics solver that is increasingly applied to manipulators thanks to its fast convergence and its tendency to generate more realistic configurations. However, under tight error constraints, FABRIK exhibits unstable convergence behavior, which is unsatisfactory for real-time motion planning of manipulators. In this paper, a novel inverse kinematics algorithm combining FABRIK with sequential quadratic programming (SQP) is proposed, in which the joint angles deduced by FABRIK are taken as the initial seed of the SQP algorithm to avoid getting stuck in local minima. The combined algorithm is evaluated through experiments, and under tight error constraints it achieves a higher success rate and faster solution time than FABRIK. Moreover, the combined algorithm can generate continuous trajectories for the UR5 and KUKA LBR IIWA 14 R820 manipulators in path tracking, with no pose error and with the end-effector position error within the allowable range.
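The seeding idea, taking a heuristic solver's rough answer as the SQP starting point and refining it to meet a tight error limit, can be sketched on a toy planar arm. The link lengths and the stand-in FABRIK seed below are purely illustrative, and the paper's algorithm handles full manipulator kinematics and pose (not just 2D position) rather than this simplified case.

```python
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.4, 0.3, 0.2])  # hypothetical planar 3-link arm

def fk(theta):
    """Forward kinematics of the planar chain: end-effector (x, y)."""
    angles = np.cumsum(theta)
    return np.array([np.sum(LINKS * np.cos(angles)), np.sum(LINKS * np.sin(angles))])

def refine_with_sqp(target, seed):
    """Refine a heuristic IK guess (e.g. the joint angles FABRIK reaches under a loose
    tolerance) with SQP so the residual position error satisfies a tight limit."""
    objective = lambda th: np.sum((fk(th) - target) ** 2)
    result = minimize(objective, seed, method="SLSQP",
                      bounds=[(-np.pi, np.pi)] * len(seed), tol=1e-12)
    return result.x

target = np.array([0.5, 0.4])
fabrik_seed = np.array([0.3, 0.4, 0.2])   # stand-in for a FABRIK solution
theta = refine_with_sqp(target, fabrik_seed)
print(np.linalg.norm(fk(theta) - target))  # residual error after refinement
```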
Deep neural networks (DNNs) have achieved excellent performance in various fields. However, the vulnerability of DNNs to adversarial examples (AEs) hinders their deployment in safety-critical applications. This paper proposes a novel AE detection framework, BEYOND, for trustworthy predictions. BEYOND detects AEs by distinguishing the abnormal relations between an AE and its augmented versions (i.e., neighbors) from two prospects: representation similarity and label consistency. An off-the-shelf self-supervised learning (SSL) model is used to extract the representations and predict the labels, owing to its highly informative representation capacity compared with supervised learning models. For clean samples, the representations and predictions are closely consistent with those of their neighbors, whereas those of AEs differ greatly. Furthermore, we explain this observation and show that AEs can be effectively detected by exploiting this discrepancy. We establish a rigorous justification for the effectiveness of BEYOND. Moreover, as a plug-and-play model, BEYOND can easily cooperate with adversarially trained classifiers (ATC), achieving state-of-the-art (SOTA) robust accuracy. Experimental results show that BEYOND outperforms baselines by a large margin, especially under adaptive attacks. Empowered by the robust relation network built on SSL, BEYOND surpasses baselines in terms of both detection ability and speed. Our code will be publicly available.
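The neighbor-agreement idea can be sketched as follows: an off-the-shelf SSL encoder embeds the input and several augmented neighbors, and both representation similarity and label consistency between the input and its neighbors are measured. The `augment` callable and the simple averaging are assumptions for illustration; the paper's concrete detector and thresholds are not reproduced here.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_score(encoder, classifier, x, augment, k=8):
    """Sketch of AE detection by neighbor agreement: low representation similarity and
    low label consistency between an input and its augmented neighbors indicate a likely
    adversarial example; clean inputs score high on both."""
    with torch.no_grad():
        z = F.normalize(encoder(x), dim=-1)                   # (B, D) anchor representations
        pred = classifier(x).argmax(dim=-1)                   # (B,) anchor labels
        sims, agrees = [], []
        for _ in range(k):
            xn = augment(x)                                   # stochastic augmentation (assumed)
            zn = F.normalize(encoder(xn), dim=-1)
            sims.append((z * zn).sum(dim=-1))                 # cosine similarity to the neighbor
            agrees.append((classifier(xn).argmax(dim=-1) == pred).float())
        rep_sim = torch.stack(sims).mean(dim=0)
        label_cons = torch.stack(agrees).mean(dim=0)
    return rep_sim, label_cons  # threshold both scores to flag AEs
```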
Detecting 3D objects from point clouds is a practical yet challenging task that has recently attracted increasing attention. In this paper, we propose a Label-Guided auxiliary training method for 3D object detection (LG3D), which serves as an auxiliary network to enhance the feature learning of existing 3D object detectors. Specifically, we propose two novel modules: a Label-Annotation-Inducer that maps annotations and point clouds within bounding boxes to task-specific representations, and a Label-Knowledge-Mapper that assists the original features in obtaining detection-critical representations. The proposed auxiliary network is discarded at inference and thus introduces no extra computational cost at test time. We conduct extensive experiments on indoor and outdoor datasets to verify the effectiveness of our method. For example, the proposed LG3D improves VoteNet by 2.5% and 3.1% mAP on the SUN RGB-D and ScanNetV2 datasets, respectively.
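The general pattern, a label-conditioned auxiliary branch that shapes backbone features during training and is simply dropped at inference, can be sketched as below. The module name and the MSE alignment loss are illustrative assumptions and do not reproduce the paper's inducer/mapper design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelKnowledgeAuxiliary(nn.Module):
    """Minimal sketch of label-guided auxiliary training: ground-truth annotations are
    encoded into label-aware features, and the detector's backbone features are pulled
    toward them with an auxiliary loss; the branch is dropped at inference, so no
    test-time cost is added."""
    def __init__(self, label_dim: int, feat_dim: int):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(label_dim, feat_dim), nn.ReLU(),
                                    nn.Linear(feat_dim, feat_dim))

    def forward(self, backbone_feats, label_feats):
        return F.mse_loss(backbone_feats, self.encode(label_feats))

# training: total_loss = detection_loss + aux_weight * auxiliary(feats, encoded_annotations)
# inference: only the detector runs; the auxiliary branch is not loaded.
```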
Training large neural network (NN) models requires extensive memory resources, and activation compressed training (ACT) is a promising approach to reduce the training memory footprint. This paper presents GACT, an ACT framework designed to support a broad range of machine learning tasks for generic NN architectures with limited domain knowledge. By analyzing a linearized version of ACT's approximate gradient, we prove the convergence of GACT without prior knowledge of operator types or model architectures. To keep training stable, we propose an algorithm that decides the compression ratio for each tensor by estimating its impact on the gradient at run time. We implement GACT as a PyTorch library that readily applies to any NN architecture. GACT reduces the activation memory of convolutional NNs, transformers, and graph NNs by up to 8.1x, enabling training with a 4.2x to 24.7x larger batch size, with negligible accuracy loss.
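The core ACT mechanism, storing a compressed copy of the activation needed for the backward pass and decompressing it only when the gradient is computed, can be sketched with a custom autograd function. The fixed 8-bit per-tensor quantization below is an illustrative assumption; GACT itself adapts the compression ratio per tensor based on its estimated impact on the gradient.

```python
import torch

class CompressedSaveLinear(torch.autograd.Function):
    """Sketch of activation compressed training for a linear layer: the input activation
    saved for backward is stored as int8 with a per-tensor scale instead of fp32, cutting
    training memory, and is dequantized only when gradients are computed."""

    @staticmethod
    def forward(ctx, x, weight):
        scale = x.abs().max().clamp_min(1e-8) / 127.0
        ctx.save_for_backward((x / scale).round().to(torch.int8), weight)
        ctx.scale = scale
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x_q, weight = ctx.saved_tensors
        x = x_q.float() * ctx.scale          # dequantize the compressed activation
        grad_x = grad_out @ weight
        grad_w = grad_out.t() @ x
        return grad_x, grad_w

x = torch.randn(32, 64, requires_grad=True)
w = torch.randn(16, 64, requires_grad=True)
CompressedSaveLinear.apply(x, w).sum().backward()
```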
Realizing general-purpose language intelligence has been a long-standing goal of natural language processing, in which standard evaluation benchmarks play a fundamental and guiding role. We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic. To this end, we propose CUGE, a Chinese Language Understanding and Generation Evaluation benchmark with the following features: (1) a hierarchical benchmark framework, where datasets are principally selected and organized into a language capability - task - dataset hierarchy; (2) a multi-level scoring strategy, where model performance is reported at different levels based on the hierarchical framework. To facilitate CUGE, we provide a public leaderboard that can be customized to support flexible model evaluation criteria. Evaluation results of representative pre-trained language models indicate ample room for improvement toward general-purpose language intelligence. CUGE is publicly available at cuge.baai.ac.cn.
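The multi-level scoring strategy can be illustrated with a small sketch: per-dataset scores roll up into capability-level scores, which in turn roll up into an overall score. The uniform averaging and the example capability/dataset names are assumptions; the exact weighting used by CUGE may differ.

```python
def hierarchical_score(results, hierarchy):
    """Illustrative multi-level scoring over a capability -> dataset hierarchy:
    average dataset scores per capability, then average capability scores overall."""
    capability_scores = {
        capability: sum(results[d] for d in datasets) / len(datasets)
        for capability, datasets in hierarchy.items()
    }
    overall = sum(capability_scores.values()) / len(capability_scores)
    return capability_scores, overall

# hypothetical capability/dataset names for illustration
hierarchy = {"understanding": ["cls_task", "nli_task"], "generation": ["summ_task"]}
results = {"cls_task": 0.82, "nli_task": 0.74, "summ_task": 0.61}
print(hierarchical_score(results, hierarchy))
```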